Complete the following and submit to Canvas before Oct 8, 11:59 PM.
Late work will receive 0%.
Each assignment is worth the same.
Please contact the instructor well in advance if you need help.
Before submitting your work, make sure everything runs as expected: click Kernel > Restart Kernel and Run All Cells.
Feel free to add more cells to experiment with or test your answers.
I encourage you to discuss the course material and assignment questions with your classmates. However, unless otherwise explicitly stated on the assignment, you must complete and write up your solutions on your own.
The use of GenAI is prohibited, as outlined in the course syllabus. If I suspect you of cheating, you may be asked to complete a written or oral exam on the content of this assignment.
✍ indicates a question where a mathematical proof is required.
💻 indicates a question where numerical experiments are required.
Enter your name here: YOUR NAME HERE
Approximate time spent on this assignment: ……
⚠ Do not edit the following cell. You may find some of these functions useful.
# useful definitions that we've used so far:
using Plots
using LaTeXStrings
using Polynomials
using PrettyTables

function simple_iteration( g, x1; N=100, tol=1e-10 )
    x = [ x1 ]
    for n in 2:N
        push!( x, g(x[n-1]) )
        if (abs(g(x[end]) - x[end]) < tol)
            break
        elseif (x[end] == Inf)
            @warn "simple iteration diverges to Inf"; break
        elseif (x[end] == -Inf)
            @warn "simple iteration diverges to -Inf"; break
        end
    end
    return x
end

function relaxation( f, λ, x1; N=100, tol=1e-10 )
    x = [x1]
    r = 0.
    for n in 2:N
        push!( x, x[n-1] - λ*f(x[n-1]) )
        r = abs(f(x[end]))
        if (r < tol)
            return x
        end
    end
    @warn "max iterations with |f| = $r"
    return x
end

function Newton( f, f_prime, x1; N=100, tol=1e-10 )
    x = [x1]
    r = 0.   # declared before the loop so the final @warn can reference it
    for n in 2:N
        push!( x, x[n-1] - f(x[n-1])/f_prime(x[n-1]) )
        r = abs(f(x[end]))
        if (r < tol)
            return x
        end
    end
    @warn "max iterations with |f| = $r"
    return x
end

function orderOfConvergence( x, ξ; α=0 )
    err = @. abs(x - ξ)
    logerr = @. log10( err )
    ratios = [NaN; [logerr[i+1] / logerr[i] for i in 1:length(logerr)-1]]
    if (α == 0)
        α = ratios[end]
        αr = round(α, sigdigits=3)
    end
    mu = [NaN; [err[i+1] / err[i]^α for i in 1:length(err)-1]]
    pretty_table( [1:length(x) err logerr ratios mu];
                  column_labels = [ "iteration", "absolute error", "log error", "alpha", "mu (α = $α)" ] )
end

function μ( x, ξ; α=1 )
    return @. abs( x[2:end] - ξ ) / ( abs(x[1:end-1] - ξ)^α )
end

ϵ = 1.
f = φ -> φ - ϵ*sin(φ) - 2π   # has a zero at 2π
df = φ -> 1 - ϵ*cos(φ)
ξ = 2π
6.283185307179586
g = x -> x^2 - 2
dg = x -> 2x
orderOfConvergence( Newton( g, dg, 1. ), sqrt(2) )
with $\epsilon = 0.9$. Recall that we showed that the Relaxation Method ($x_{n+1} = x_n - \lambda f(x_n)$) converged linearly with asymptotic error constant $\left| 1 - 0.1 \lambda \right|$, and that Newton's Method converged faster than quadratically. For the remainder of this question, fix $\epsilon = 1$. Here's a plot of $f$:
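As a quick standalone check of the linear-convergence claim (the setup cell's `relaxation` function does the same with tolerance checks and iterate history; the helper name `relax_sketch` and the starting guess 7.0 are illustrative choices, not part of the assignment):

```julia
# Sketch of the Relaxation Method x_{n+1} = x_n - λ f(x_n).
# With ϵ = 0.9, f′(ξ) = 1 - ϵ = 0.1 at ξ = 2π, so the asymptotic error
# constant is |1 - 0.1λ|; λ = 1 gives slow linear convergence, with the
# error shrinking by roughly a factor 0.9 per step.
function relax_sketch(f, λ, x; N = 200)
    for _ in 1:N
        x -= λ * f(x)
    end
    return x
end

ϵ = 0.9
f = φ -> φ - ϵ*sin(φ) - 2π
xN = relax_sketch(f, 1.0, 7.0)   # xN ≈ ξ = 2π
```

Since $|7 - 2\pi| \approx 0.72$ and $0.72 \cdot 0.9^{200} \approx 5 \times 10^{-10}$, 200 iterations suffice here despite the slow rate.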
Find a research paper explaining a method named after one of the following people: Halley, Householder, Osada, Ostrowski, Steffensen, Traub. What is the main novelty of this method? How does it (claim to) improve upon previous methods in the literature? Implement your chosen method and test it on a function of your choice.
Please clearly cite which paper you are describing.
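For orientation only (this is not a substitute for the paper you find and cite, and your chosen method may differ): Halley's iteration is a classical example of the kind of method listed above. It augments Newton's step with second-derivative information, $x_{n+1} = x_n - \dfrac{2 f(x_n) f'(x_n)}{2 f'(x_n)^2 - f(x_n) f''(x_n)}$, and converges cubically at simple roots. A minimal sketch, mirroring the style of the setup cell's `Newton`:

```julia
# Sketch of Halley's method; `halley` is an illustrative name.
# x_{n+1} = x_n - 2 f f' / (2 (f')^2 - f f'')
function halley(f, df, ddf, x1; N = 100, tol = 1e-10)
    x = [x1]
    for n in 2:N
        fx, dfx, ddfx = f(x[end]), df(x[end]), ddf(x[end])
        push!(x, x[end] - 2fx*dfx / (2dfx^2 - fx*ddfx))
        if abs(f(x[end])) < tol
            return x
        end
    end
    @warn "max iterations reached"
    return x
end

# quick check on g(x) = x^2 - 2, whose positive root is √2
x = halley(x -> x^2 - 2, x -> 2x, x -> 2.0, 1.0)
```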